Bao Wang

Improving Flow Matching by Aligning Flow Divergence

Jan 31, 2026

RMFlow: Refined Mean Flow by a Noise-Injection Step for Multimodal Generation

Jan 31, 2026

Towards Multiscale Graph-based Protein Learning with Geometric Secondary Structural Motifs

Jan 31, 2026

Learning Decentralized Swarms Using Rotation Equivariant Graph Neural Networks

Feb 26, 2025

Learning to Control the Smoothness of Graph Convolutional Network Features

Oct 18, 2024

Deep Learning with Data Privacy via Residual Perturbation

Aug 11, 2024

Adaptive and Implicit Regularization for Matrix Completion

Aug 11, 2022

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

Aug 01, 2022

Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs

Apr 19, 2022

Learning POD of Complex Dynamics Using Heavy-ball Neural ODEs

Feb 24, 2022